Latent Space Oddity: on the Curvature of Deep Generative Models
Authors
Abstract
Deep generative models provide a systematic way to learn nonlinear data distributions through a set of latent variables and a nonlinear “generator” function that maps latent points into the input space. The nonlinearity of the generator implies that the latent space gives a distorted view of the input space. Under mild conditions, we show that this distortion can be characterized by a stochastic Riemannian metric, and we demonstrate that distances and interpolants are significantly improved under this metric. This in turn improves probability distributions, sampling algorithms and clustering in the latent space. Our geometric analysis further reveals that current generators provide poor variance estimates and we propose a new generator architecture with vastly improved variance estimates. Results are demonstrated on convolutional and fully connected variational autoencoders, but the formalism easily generalizes to other deep generative models.
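The distortion the abstract refers to can be made concrete: the generator's Jacobian pulls the input-space metric back onto the latent space as G(z) = J(z)ᵀJ(z). Below is a minimal sketch of this pull-back metric using a toy hand-written generator and finite differences; the function `generator` is a hypothetical stand-in, not the deep network from the paper.

```python
import numpy as np

# Toy nonlinear "generator" f: R^2 (latent) -> R^3 (input space).
# Hypothetical example; the paper's generators are deep networks.
def generator(z):
    x, y = z
    return np.array([x, y, x**2 + y**2])  # a paraboloid embedding

def jacobian(f, z, eps=1e-6):
    """Numerical Jacobian of f at z via forward differences."""
    z = np.asarray(z, dtype=float)
    f0 = f(z)
    J = np.zeros((f0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (f(z + dz) - f0) / eps
    return J

def pullback_metric(f, z):
    """Riemannian metric G(z) = J^T J induced on the latent space."""
    J = jacobian(f, z)
    return J.T @ J

G = pullback_metric(generator, np.array([1.0, 0.0]))
# Near z = (1, 0) the surface is steep along the first latent axis,
# so G stretches that direction: G ~ [[5, 0], [0, 1]].
```

Latent distances under G then reflect lengths on the generated surface rather than raw latent-coordinate differences, which is what makes interpolation and clustering behave better.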
Similar resources
The Riemannian Geometry of Deep Generative Models
Deep generative models learn a mapping from a low-dimensional latent space to a high-dimensional data space. Under certain regularity conditions, these models parameterize nonlinear manifolds in the data space. In this paper, we investigate the Riemannian geometry of these generated manifolds. First, we develop efficient algorithms for computing geodesic curves, which provide an intrinsic notion...
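One standard way to compute such geodesics is to discretize a latent curve and minimize its energy, i.e., the sum of squared segment lengths measured in the generator's output space. The sketch below does this with plain gradient descent and finite differences on a toy generator; both the generator and the solver settings are illustrative assumptions, not the algorithm from the cited paper.

```python
import numpy as np

# Toy "generator" f: R^2 -> R^3 with a high-curvature bump at the origin.
# Hypothetical stand-in for a trained deep generator.
def generator(z):
    x, y = z
    return np.array([x, y, 4.0 * np.exp(-(x**2 + y**2))])

def curve_energy(Z):
    """Discrete energy of a latent curve Z (K x 2): sum of squared
    segment lengths measured in the generator's output space."""
    X = np.array([generator(z) for z in Z])
    return float(np.sum(np.diff(X, axis=0) ** 2))

def geodesic(z0, z1, K=12, steps=500, lr=0.01, eps=1e-5):
    """Approximate a geodesic by gradient descent on the curve energy
    with fixed endpoints (finite-difference gradients; a minimal
    sketch, not an efficient solver)."""
    t = np.linspace(0.0, 1.0, K)[:, None]
    Z = (1 - t) * z0 + t * z1          # initialize with the straight line
    for _ in range(steps):
        g = np.zeros_like(Z)
        E0 = curve_energy(Z)
        for k in range(1, K - 1):      # endpoints stay fixed
            for d in range(2):
                Zp = Z.copy()
                Zp[k, d] += eps
                g[k, d] = (curve_energy(Zp) - E0) / eps
        Z = Z - lr * g
    return Z

z0, z1 = np.array([-1.5, 0.2]), np.array([1.5, 0.2])
t = np.linspace(0, 1, 12)[:, None]
straight = (1 - t) * z0 + t * z1
Z_opt = geodesic(z0, z1)
# The optimized curve detours around the bump, so its discrete energy
# is lower than that of the straight latent line.
```

Minimizing the discrete energy (rather than the length directly) also spaces the curve points roughly uniformly with respect to output-space distance, which is why it is the usual objective for geodesic computation.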
Natural-Gradient Stochastic Variational Inference for Non-Conjugate Structured Variational Autoencoder
We propose a new variational inference method which uses recognition models for amortized inference in graphical models that contain deep generative models. Unlike many existing approaches, our method can handle non-conjugacy in both the latent graphical model and the deep generative model, and enables fully amortized inference at test time. Our method is based on an extension of a recently pro...
Metrics for Deep Generative Models
Neural samplers such as variational autoencoders (VAEs) or generative adversarial networks (GANs) approximate distributions by transforming samples from a simple random source—the latent space—to samples from a more complex distribution represented by a dataset. While the manifold hypothesis implies that the density induced by a dataset contains large regions of low density, the training criter...
Deep Regression Bayesian Network and Its Applications
Deep directed generative models have attracted much attention recently due to their generative modeling nature and powerful data representation ability. In this paper, we review different structures of deep directed generative models and the learning and inference algorithms associated with the structures. We focus on a specific structure that consists of layers of Bayesian Networks due to the ...
Latent Space of Generative Models
Several recent papers have treated the latent space of deep generative models, e.g., GANs or VAEs, as Riemannian manifolds. The argument is that operations such as interpolation are better done along geodesics that minimize path length not in the latent space but in the output space of the generator. However, this implicitly assumes that some simple metric such as L2 is meaningful in the output...
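The contrast between path length in the latent space and in the output space is easy to demonstrate: a straight latent segment traversed at constant speed can cover the output space very unevenly. The sketch below measures the output-space length of a straight latent line under a toy one-dimensional generator; the generator is a hypothetical illustration, assuming an L2 metric on the output space as the abstract discusses.

```python
import numpy as np

# Toy generator with a sharp nonlinearity near z = 0 (hypothetical).
def generator(z):
    return np.array([z[0], 3.0 * np.tanh(5.0 * z[0])])

def output_length(f, z0, z1, K=200):
    """Length of the straight latent segment z0 -> z1, measured as a
    polyline in the generator's output space."""
    t = np.linspace(0.0, 1.0, K)[:, None]
    Z = (1 - t) * z0 + t * z1
    X = np.array([f(z) for z in Z])
    return float(np.sum(np.linalg.norm(np.diff(X, axis=0), axis=1)))

z0, z1 = np.array([-1.0]), np.array([1.0])
latent_len = float(np.linalg.norm(z1 - z0))   # = 2
out_len = output_length(generator, z0, z1)    # > 6: the tanh step dominates
# Equal steps in latent space are far from equal steps in output space:
# most of the output-space length is concentrated near z = 0.
```

This gap between the two lengths is exactly why geodesics in the output (or pull-back) metric give different, and often more meaningful, interpolants than straight latent lines.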